fix: watch worktree path for implementation_plan.json changes (#1805) #1842
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough
Adds a cancel-aware, serialized watcher lifecycle and a public getter for the watched spec dir in FileWatcher; makes IPC handlers and task execution worktree-aware (prefer the worktree for watching, fall back to reading the worktree implementation_plan.json, re-watch the worktree when appropriate, and persist progress to worktree paths).
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant TaskManager
    participant IPCHandler
    participant FileWatcher
    participant FS as Worktree/Project FS
    TaskManager->>IPCHandler: exit / persist progress
    IPCHandler->>FileWatcher: getWatchedSpecDir(taskId)
    FileWatcher-->>IPCHandler: watchedDir or null
    alt watchedDir present and in-memory plan absent
        IPCHandler->>FS: compute worktreeSpecDir & read implementation_plan.json
        FS-->>IPCHandler: implementation_plan.json (found / not found)
        alt plan found in worktree
            IPCHandler->>FileWatcher: compare watchedDir vs worktreeSpecDir
            alt different
                IPCHandler->>FileWatcher: watch(taskId, worktreeSpecDir)
                FileWatcher-->>IPCHandler: success / error
            end
        end
    end
    IPCHandler->>FS: persist phase/progress to chosen spec dir
    FS-->>IPCHandler: write ack
```
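To make the fallback branch in the diagram concrete, here is a hedged sketch of the worktree plan read. The helper name, signature, and JSON handling are assumptions for illustration, not the PR's exact code; the re-watch comparison is covered separately in the review comments below.

```ts
import { existsSync, readFileSync } from 'node:fs';
import path from 'node:path';

// Hypothetical helper mirroring the diagrammed exit-handler fallback: when the in-memory
// plan is absent, read implementation_plan.json from the worktree spec dir instead.
function readWorktreePlanFallback(worktreePath: string, specsBaseDir: string, specId: string): unknown | null {
  const worktreeSpecDir = path.join(worktreePath, specsBaseDir, specId);
  const worktreePlanPath = path.join(worktreeSpecDir, 'implementation_plan.json');
  if (!existsSync(worktreePlanPath)) {
    return null; // not found: the caller keeps whatever state it already has
  }
  return JSON.parse(readFileSync(worktreePlanPath, 'utf-8'));
}
```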
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Summary of Changes
Hello @AndyMik90, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a critical issue where subtasks were not being generated or updated correctly in the UI due to the file watcher failing to track changes in implementation_plan.json when tasks run in a worktree.
Code Review
This pull request effectively resolves the issue of the file watcher not tracking implementation_plan.json changes in worktrees. The core of the fix involves introducing getSpecDirForWatcher to correctly determine whether to watch the main project path or the worktree path, and applying this at all file watcher setup locations. The addition of re-watching logic in agent-events-handlers.ts for tasks where the worktree is created post-start is a thoughtful touch that handles an important edge case. The fallback mechanism in the exit handler to read the final plan from the worktree is also a solid defensive addition.
The code is well-structured and the changes are logical. I have a couple of suggestions regarding variable naming consistency in execution-handlers.ts to improve long-term maintainability. Overall, this is a great fix.
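For context, a minimal sketch of what a worktree-first helper along the lines of getSpecDirForWatcher could look like, assuming the findTaskWorktree helper and the specsBaseDir/specId parameters referenced in this thread; the PR's actual implementation may differ (a later review round adds a plan-file existence check):

```ts
import { existsSync } from 'node:fs';
import path from 'node:path';

// Assumed signature: findTaskWorktree returns the worktree root for a spec, or null if none exists.
declare function findTaskWorktree(projectPath: string, specId: string): string | null;

// Prefer the worktree spec dir when a worktree exists; otherwise fall back to the main project path.
function getSpecDirForWatcher(projectPath: string, specsBaseDir: string, specId: string): string {
  const worktreePath = findTaskWorktree(projectPath, specId);
  if (worktreePath && existsSync(worktreePath)) {
    return path.join(worktreePath, specsBaseDir, specId);
  }
  return path.join(projectPath, specsBaseDir, specId);
}
```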
```diff
 const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
 const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
 fileWatcher.watch(taskId, specDirForWatcher);

 // Check if spec.md exists to determine whether to run spec creation or task execution
-const specFilePath = path.join(specDirForWatcher, AUTO_BUILD_PATHS.SPEC_FILE);
+// Check main project path for spec file (spec is created before worktree)
+const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
+const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);
```
For consistency with the TASK_START and TASK_UPDATE_STATUS handlers, it would be clearer to standardize the variable names used here. Using specsBaseDir, watchSpecDir, and mainSpecDir would improve maintainability across the file.
Suggested change
```diff
-const specsBaseDirForRecovery = getSpecsDir(project.autoBuildPath);
-const specDirForWatcher = getSpecDirForWatcher(project.path, specsBaseDirForRecovery, task.specId);
-fileWatcher.watch(taskId, specDirForWatcher);
+const specsBaseDir = getSpecsDir(project.autoBuildPath);
+const watchSpecDir = getSpecDirForWatcher(project.path, specsBaseDir, task.specId);
+fileWatcher.watch(taskId, watchSpecDir);

 // Check if spec.md exists to determine whether to run spec creation or task execution
 // Check main project path for spec file (spec is created before worktree)
-const mainSpecDirForRecovery = path.join(project.path, specsBaseDirForRecovery, task.specId);
-const specFilePath = path.join(mainSpecDirForRecovery, AUTO_BUILD_PATHS.SPEC_FILE);
+const mainSpecDir = path.join(project.path, specsBaseDir, task.specId);
+const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
```diff
 const taskDescription = task.description || task.title;
 console.warn(`[Recovery] Starting spec creation for: ${task.specId}`);
-agentManager.startSpecCreation(taskId, project.path, taskDescription, specDirForWatcher, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);
```
To align with the suggested variable name standardization in the previous block, please update mainSpecDirForRecovery to mainSpecDir here as well.
Suggested change
```diff
-agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDirForRecovery, task.metadata, baseBranchForRecovery, project.id);
+agentManager.startSpecCreation(taskId, project.path, taskDescription, mainSpecDir, task.metadata, baseBranchForRecovery, project.id);
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-247: The call to fileWatcher.watch(taskId, worktreeSpecDir) is
asynchronous and may reject; update the re-watch block (the logic using
fileWatcher.getWatchedSpecDir, currentWatchDir, worktreeSpecDir,
existsSync(worktreePlanPath), and taskId) to properly handle the returned
Promise instead of fire-and-forget — either await the call inside an async
function or attach a .catch that logs errors (e.g., using console.error or
process logger) so any rejection from internal unwatch()/watch() is caught and
does not create an unhandled promise rejection.
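Concretely, either option in the prompt above is a one-line change at the call site. A hedged sketch, assuming fileWatcher, taskId, and worktreeSpecDir are in scope as in the handler:

```ts
// Option A: inside an async function, await so re-watches are serialized and errors surface.
await fileWatcher.watch(taskId, worktreeSpecDir).catch((err) => {
  console.error(`[agent-events-handlers] Failed to re-watch ${worktreeSpecDir}:`, err);
});

// Option B: keep the call fire-and-forget, but attach a .catch so a rejection from the
// internal unwatch()/watch() sequence never becomes an unhandled promise rejection.
fileWatcher.watch(taskId, worktreeSpecDir).catch((err) => {
  console.error(`[agent-events-handlers] Failed to re-watch ${worktreeSpecDir}:`, err);
});
```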
AndyMik90
left a comment
✅ Auto Claude Review - APPROVED
Status: Ready to Merge
Summary: Merge Verdict: ✅ READY TO MERGE
✅ Ready to merge - All checks passing, no blocking issues found.
No blocking issues found
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | Low | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |
Generated by Auto Claude PR Review
This automated review found no blocking issues. The PR can be safely merged.
Generated by Auto Claude
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts`:
- Around line 238-249: The FileWatcher.watch() call here can race because
watch() is async and updates watcher state after awaiting unwatch(), so change
the call in agent-events-handlers (where fileWatcher.watch(taskId,
worktreeSpecDir) is invoked) to await the promise to serialize updates (i.e.,
await fileWatcher.watch(taskId, worktreeSpecDir).catch(...)); alternatively, if
you prefer fixing internally, modify FileWatcher.watch() so it sets
this.watchers.set(taskId, pendingMarker) synchronously before awaiting
this.unwatch(taskId) to ensure the in-memory state prevents overlapping calls;
reference FileWatcher.watch(), FileWatcher.unwatch(),
fileWatcher.getWatchedSpecDir(taskId) and the current call site in
agent-events-handlers.ts.
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 221-224: fileWatcher.watch(...) is being called fire-and-forget in
execution-handlers (e.g., the call where specsBaseDir is derived via getSpecsDir
and watchSpecDir via getSpecDirForWatcher) which can cause unhandled promise
rejections; update each fileWatcher.watch call (the one using taskId and
watchSpecDir, and the other two calls around the same areas) to append a
.catch(...) that logs the error (use the same logging pattern used in
agent-events-handlers.ts) so any rejection is handled and reported. Ensure you
reference the existing taskId/watchSpecDir variables and the
fileWatcher.watch(...) call sites when adding the .catch.
- Around line 1163-1171: The inner const specsBaseDir declared inside the
autoRestart block duplicates and shadows the outer specsBaseDir computed earlier
via getSpecsDir(project.autoBuildPath); remove the inner declaration and reuse
the outer specsBaseDir (used together with getSpecDirForWatcher, watchSpecDir,
mainSpecDir, and specFilePath) so the code no longer redeclares specsBaseDir and
relies on the single previously computed value.
Review Finding Analysis
Findings Addressed (commit e7875dd)
- CodeRabbit - Critical: Race condition in FileWatcher.watch() (agent-events-handlers.ts:238-249)
- CodeRabbit - Trivial: Missing .catch() on fire-and-forget fileWatcher.watch() calls
- CodeRabbit - Trivial: Shadowed specsBaseDir re-declaration in the autoRestart block
Findings Already Resolved
- Gemini - Variable naming consistency (execution-handlers.ts:1164-1182)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 32-38: The watch/unwatch race can interleave while awaiting
existing.watcher.close(); update unwatch(taskId) to check a pendingWatches set
and return early if a watch(taskId) is in-flight (e.g., if
(this.pendingWatches.has(taskId)) return;), and ensure watch(taskId) adds taskId
to this.pendingWatches at start and removes it after the async close/create flow
completes so cleanup is coordinated; reference the methods watch(), unwatch(),
the this.watchers map, existing.watcher.close(), and the pendingWatches set when
making the change.
- Around line 42-46: The current check in the file-watcher that calls
this.emit('error', taskId, `Plan file not found: ${planPath}`) when
existsSync(planPath) is false should be changed to silently return instead of
emitting an error to avoid UI noise during normal watch() retry flows; locate
the block that references existsSync and planPath and remove the emit('error',
...) call so the function simply returns when the plan file is missing
(optionally keep a non-error debug/log statement if you need visibility).
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 1174-1176: The inner block re-declares const mainSpecDir shadowing
the outer mainSpecDir; remove the inner re-declaration inside the autoRestart
block and reuse the outer mainSpecDir (which is computed from project.path,
specsBaseDir and task.specId) when constructing specFilePath (using
AUTO_BUILD_PATHS.SPEC_FILE). Ensure you only remove the duplicated const
mainSpecDir = ... line and keep the path.join(mainSpecDir,
AUTO_BUILD_PATHS.SPEC_FILE) usage so no other changes are needed.
```ts
// Check if plan file exists
if (!existsSync(planPath)) {
  this.emit('error', taskId, `Plan file not found: ${planPath}`);
  return;
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
# First, locate the file-watcher.ts file
find . -name "file-watcher.ts" -type f
```
Length of output: 105

🏁 Script executed:
```bash
# Read the file-watcher.ts to understand the context
cd apps/frontend/src/main
wc -l file-watcher.ts
head -80 file-watcher.ts
```
Length of output: 2543

🏁 Script executed:
```bash
# Search for error event listeners on fileWatcher instances
rg -n "fileWatcher\.on\(\s*['\"]error['\"]" --type=ts -C5
rg -n "\.on\(\s*['\"]error['\"]" --type=ts -C5 | head -100
```
Length of output: 8240

🏁 Script executed:
```bash
# Search for where FileWatcher is instantiated and used
rg -n "new FileWatcher|new file-watcher|FileWatcher" --type=ts | head -20
```
Length of output: 2447

🏁 Script executed:
```bash
# Search for 'not-ready' or 'warn' event patterns to see if already used
rg -n "emit\(['\"]warn|emit\(['\"]not-ready|emit\(['\"]error" --type=ts apps/frontend/src/main/
```
Length of output: 4681
Emitting 'error' for a missing plan file creates unnecessary UI noise during normal retry flow.
The error listener in agent-events-handlers.ts (line 309) sends all 'error' events to the renderer via IPC_CHANNELS.TASK_ERROR, surfacing them in the UI. During normal operation—when the worktree and implementation_plan.json are created after the initial watch() call—this emits alarming error messages even though callers already handle retries silently.
Remove the error emission for missing files and return silently:
Proposed fix
```diff
 // Check if plan file exists
 if (!existsSync(planPath)) {
-  this.emit('error', taskId, `Plan file not found: ${planPath}`);
+  // Plan file may not exist yet (e.g. worktree still being created).
+  // Callers handle re-watching, so a silent return avoids noisy error events.
   return;
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 // Check if plan file exists
 if (!existsSync(planPath)) {
-  this.emit('error', taskId, `Plan file not found: ${planPath}`);
+  // Plan file may not exist yet (e.g. worktree still being created).
+  // Callers handle re-watching, so a silent return avoids noisy error events.
   return;
 }
```
🤖 Prompt for AI Agents
In `@apps/frontend/src/main/file-watcher.ts` around lines 42 - 46, The current
check in the file-watcher that calls this.emit('error', taskId, `Plan file not
found: ${planPath}`) when existsSync(planPath) is false should be changed to
silently return instead of emitting an error to avoid UI noise during normal
watch() retry flows; locate the block that references existsSync and planPath
and remove the emit('error', ...) call so the function simply returns when the
plan file is missing (optionally keep a non-error debug/log statement if you
need visibility).
AndyMik90
left a comment
🤖 Auto Claude PR Review
🟠 Follow-up Review: Needs Revision
🟠 Needs revision - 3 blocking issue(s) require fixes.
Resolution Status
- ✅ Resolved: 0 previous findings addressed
- ❌ Unresolved: 0 previous findings remain
- 🆕 New Issues: 4 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 0 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 4 findings verified as genuine issues
- 👤 Needs Human Review: 0 findings require manual verification
🚨 Blocking Issues
- quality: pendingWatches guard silently drops re-watch calls with different specDir
- quality: pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit
- quality: unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern
Verdict
All 20 CI checks are passing. No previous findings to verify. The 2 commits address CodeRabbit's critical race condition finding by adding a pendingWatches guard and .catch() handlers to watch() calls — good progress. However, 3 new MEDIUM severity issues were identified and confirmed valid by finding-validator: (1) NEW-001: The pendingWatches guard uses Set keyed only by taskId, silently dropping legitimate re-watch calls that carry a different specDir — this undermines the worktree re-watch logic in agent-events-handlers.ts:246. (2) NEW-002: unwatch() has no awareness of pendingWatches, so if an agent exits quickly during watch() setup, the newly created watcher leaks. This is ironic since the pendingWatches mechanism was specifically introduced to fix race conditions. (3) NEW-003: fileWatcher.unwatch(taskId) at agent-events-handlers.ts:128 lacks .catch(), inconsistent with the .catch() pattern applied to all watch() calls in this same PR. Additionally, 1 LOW severity quality issue (CMT-001: shadowed mainSpecDir). All findings are easily fixable. The race condition fixes are a step in the right direction but need refinement to fully address the concurrency edge cases.
Review Process
Agents invoked: new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (4 selected of 4 total)
🟡 [NEW-001] [MEDIUM] pendingWatches guard silently drops re-watch calls with different specDir
📁 apps/frontend/src/main/file-watcher.ts:27
The pendingWatches guard uses a Set keyed only by taskId, so it cannot distinguish between a redundant duplicate call (same specDir) and a legitimate re-watch call (different specDir from agent-events-handlers.ts:246). If the re-watch fires while initial watch() is still pending (e.g., awaiting watcher.close()), the re-watch is silently discarded, leaving the watcher on the wrong path. Fix: Use a Map<string, string> storing taskId -> specDir, and only skip when specDir matches.
Suggested fix:
Change pendingWatches from Set<string> to Map<string, string> storing taskId -> specDir. Only return early if the pending specDir matches the requested specDir. When a different specDir is requested, allow the call to proceed (it will close the old watcher in the try block).
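A minimal sketch of the Map-based guard described in the suggested fix, assuming the surrounding FileWatcher fields; it only illustrates the dedup-by-specDir idea, not the full watch() body:

```ts
class FileWatcherSketch {
  // taskId -> specDir currently being set up; lets a re-watch with a *different* dir proceed.
  private pendingWatches = new Map<string, string>();

  async watch(taskId: string, specDir: string): Promise<void> {
    if (this.pendingWatches.get(taskId) === specDir) {
      return; // same dir already being set up: skip the redundant duplicate call
    }
    this.pendingWatches.set(taskId, specDir);
    try {
      // ... close any existing watcher and create the new one (omitted) ...
    } finally {
      // cleanup refined in later review rounds: only delete if we still own the entry
      if (this.pendingWatches.get(taskId) === specDir) {
        this.pendingWatches.delete(taskId);
      }
    }
  }
}
```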
🟡 [NEW-002] [MEDIUM] pendingWatches does not coordinate with unwatch(), allowing watcher leak on quick task exit
📁 apps/frontend/src/main/file-watcher.ts:27
unwatch() has no awareness of pendingWatches. If unwatch() runs during watch()'s await (agent exits quickly after start): unwatch() closes old watcher and deletes from map, then watch() resumes creating a NEW watcher that is stored but never cleaned up. The leaked file watcher remains open until app shutdown (unwatchAll) or next watch() for the same taskId. Fix: Add a cancellation mechanism so unwatch() can signal watch() to abort after its await points.
Suggested fix:
Add a cancelledWatches Set. In unwatch(), add taskId to cancelledWatches. In watch(), after each await point, check if cancelledWatches.has(taskId) and return early before creating the new watcher. Clean up cancelledWatches in both methods' finally blocks.
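A hedged sketch of the cancellation handshake between unwatch() and an in-flight watch() described above; the field and method names follow this review, and everything else (chokidar usage, class shape) is an assumption for illustration:

```ts
import { FSWatcher, watch as chokidarWatch } from 'chokidar';

class CancellableWatcherSketch {
  private watchers = new Map<string, FSWatcher>();
  private cancelledWatches = new Set<string>();

  async watch(taskId: string, specDir: string): Promise<void> {
    const existing = this.watchers.get(taskId);
    if (existing) {
      await existing.close(); // await point: unwatch() may run while we are suspended here
    }
    if (this.cancelledWatches.has(taskId)) {
      this.cancelledWatches.delete(taskId);
      return; // unwatch() ran during the await: do not create (and leak) a new watcher
    }
    this.watchers.set(taskId, chokidarWatch(specDir));
  }

  async unwatch(taskId: string): Promise<void> {
    this.cancelledWatches.add(taskId); // signal any in-flight watch() to abort
    const existing = this.watchers.get(taskId);
    if (existing) {
      await existing.close();
      this.watchers.delete(taskId);
    }
  }
}
```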
🟡 [NEW-003] [MEDIUM] unwatch() promise not handled in non-async exit handler — inconsistent with watch() .catch() pattern
📁 apps/frontend/src/main/ipc-handlers/agent-events-handlers.ts:128
fileWatcher.unwatch(taskId) at line 128 is called in a non-async callback without .catch(). Since unwatch() is async and internally awaits watcher.close(), a rejection produces an unhandled promise rejection. This is inconsistent with the .catch() pattern this PR applied to all watch() calls (agent-events-handlers.ts:246, execution-handlers.ts:224, 729, 1169).
Suggested fix:
Add .catch() handler: fileWatcher.unwatch(taskId).catch((err) => { console.error(`[agent-events-handlers] Failed to unwatch for ${taskId}:`, err); });
🔵 [CMT-001] [LOW] Shadowed mainSpecDir re-declaration in autoRestart block
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:1175
const mainSpecDir is declared at line 935 in the outer try block and again at line 1175 inside the nested autoRestart block. Both compute path.join(project.path, specsBaseDir, task.specId) from the same inputs. The inner declaration is redundant and shadows the outer variable which is already in scope.
Suggested fix:
Remove the redundant const mainSpecDir declaration at line 1175 and use the outer variable directly.
This review was generated by Auto Claude.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/frontend/src/main/file-watcher.ts (1)
137-145: ⚠️ Potential issue | 🟡 Minor
unwatchAll() doesn't cancel in-flight watch() calls. If a watch() is currently awaiting (e.g., during close()), unwatchAll() clears this.watchers but the in-flight watch() will proceed to create and store a new watcher after unwatchAll() returns. Consider marking all pending task IDs as cancelled:
Proposed fix
```diff
 async unwatchAll(): Promise<void> {
+  // Cancel any in-flight watch() calls
+  for (const taskId of this.pendingWatches.keys()) {
+    this.cancelledWatches.add(taskId);
+  }
   const closePromises = Array.from(this.watchers.values()).map(
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/file-watcher.ts` around lines 137 - 145, unwatchAll() currently clears this.watchers while in-flight watch() calls can still complete and re-insert watchers; fix by marking pending tasks as cancelled before closing so watch() can detect and abort storing a new watcher. Specifically, in unwatchAll() iterate over this.watchers and for each info set a cancellation marker (e.g., info.cancelled = true or add info.taskId to a cancelledTaskIds Set), then await info.watcher.close(), then clear the map and remove/clear cancellation markers; update watch() to check info.cancelled or cancelledTaskIds.has(taskId) before calling this.watchers.set(taskId, ... ) (and clean up the cancelled marker if found) so no watcher is re-registered after unwatchAll() completes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 29-37: The watch() method can be superseded by a later call and
leak the previous watcher; after every await inside watch (notably after
awaiting close() and after creating the new watcher) re-check that
this.pendingWatches.get(taskId) === specDir and bail out if it’s different so
you don't continue and overwrite this.watchers[taskId]; when storing the created
watcher into this.watchers and when clearing this.pendingWatches ensure the
stored specDir still matches to avoid orphaning the original watcher (apply
these guards around the await close() and after watcher creation in the watch()
implementation).
---
Outside diff comments:
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 137-145: unwatchAll() currently clears this.watchers while
in-flight watch() calls can still complete and re-insert watchers; fix by
marking pending tasks as cancelled before closing so watch() can detect and
abort storing a new watcher. Specifically, in unwatchAll() iterate over
this.watchers and for each info set a cancellation marker (e.g., info.cancelled
= true or add info.taskId to a cancelledTaskIds Set), then await
info.watcher.close(), then clear the map and remove/clear cancellation markers;
update watch() to check info.cancelled or cancelledTaskIds.has(taskId) before
calling this.watchers.set(taskId, ... ) (and clean up the cancelled marker if
found) so no watcher is re-registered after unwatchAll() completes.
---
Duplicate comments:
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 56-60: The watch() logic in file-watcher.ts currently emits an
'error' when the plan file is missing (existsSync(planPath) false) which creates
needless UI noise; change it to silently return and avoid emitting the 'error'
event — if a logger is available (e.g., this.logger) replace the emit with a
debug-level message like this.logger.debug(`Plan file not found: ${planPath}`)
and then return; otherwise simply return without calling this.emit('error',
...). Ensure you update the code around the existsSync(planPath) check and
remove the this.emit('error', taskId, ...) call.
AndyMik90
left a comment
🤖 Auto Claude PR Review
🟠 Follow-up Review: Needs Revision
🟠 Needs revision - 7 blocking issue(s) require fixes.
Resolution Status
- ✅ Resolved: 4 previous findings addressed
- ❌ Unresolved: 0 previous findings remain
- 🆕 New Issues: 7 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 0 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 5 findings verified as genuine issues
- 👤 Needs Human Review: 0 findings require manual verification
🚨 Blocking Issues
- Branch Out of Date: PR branch is behind the base branch and needs to be updated
- quality: Concurrent watch() calls with different specDir for same taskId can leak a watcher
- quality: unwatchAll() does not cancel in-flight watch() calls
- quality: Unhandled promise from unwatch() in TASK_STOP handler
- quality: Unhandled promise from unwatch() in TASK_RECOVER_STUCK handler
- quality: [FROM COMMENTS] CodeRabbit: Race condition with concurrent watch() calls for different specDirs (duplicate of NEW-RACE-001)
- quality: [FROM COMMENTS] CodeRabbit: unwatchAll() doesn't cancel in-flight watch() calls (duplicate of NEW-UNWATCHALL-002)
Verdict
All 20 CI checks are passing. All 4 previous findings from the last review have been properly resolved: pendingWatches now uses Map to distinguish specDirs, cancelledWatches mechanism prevents watcher leaks from unwatch() during in-flight watch(), .catch() was added to unwatch() in agent-events-handlers.ts, and the mainSpecDir shadowing was removed. Good progress.
However, 5 new validated findings remain:
- HIGH: Concurrent watch() race (NEW-RACE-001) - Two concurrent watch() calls with different specDirs for the same taskId can orphan a watcher. Both CodeRabbit and our reviewer independently confirmed this. The PR's own re-watch logic creates conditions for this race. Fix: add a supersession check after await.
- MEDIUM: unwatchAll() missing cancellation (NEW-UNWATCHALL-002) - unwatchAll() was not updated with the cancellation pattern. Fix: iterate pendingWatches.keys() and add to cancelledWatches.
- MEDIUM: Unhandled promise in TASK_STOP (NEW-STOP-003) - fileWatcher.unwatch(taskId) at line 306 has no .catch(). Fix: add .catch().
- MEDIUM: Unhandled promise in recovery (NEW-RECOVER-004) - fileWatcher.unwatch(taskId) at line 1086 has no .catch(). Fix: await or add .catch().
- LOW: Double-close on same watcher (NEW-DBLCLOSE-005) - Benign in practice but relies on undocumented chokidar behavior.
The HIGH race condition and the 3 MEDIUM issues block merge. All fixes are straightforward (1-3 lines each).
Review Process
Agents invoked: resolution-verifier, new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (7 selected of 7 total)
🟠 [NEW-RACE-001] [HIGH] Concurrent watch() calls with different specDir for same taskId can leak a watcher
📁 apps/frontend/src/main/file-watcher.ts:34
When two watch() calls run concurrently for the same taskId but different specDir values, the pendingWatches guard allows both through (different specDirs). After await existing.watcher.close(), whichever call resumes second deletes the other's just-created watcher from the map WITHOUT closing it, orphaning an active file watcher. The PR's own re-watch logic in agent-events-handlers.ts creates conditions where this race is possible.
Suggested fix:
After the await on line 44, add a supersession check: if (this.pendingWatches.get(taskId) !== specDir) { return; } -- this detects that a newer watch() call has superseded this one and bails out.
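A hedged illustration of that one-line supersession check, placed right after the await the finding points at; the surrounding lines (existing, this.pendingWatches) are assumed to be in scope as in the discussion above, not quoted from the file:

```ts
// Inside watch(taskId, specDir), after closing any previous watcher:
await existing.watcher.close();

// Supersession check: if a newer watch() call has replaced our pending entry
// (different specDir for the same taskId), bail out instead of overwriting its watcher.
if (this.pendingWatches.get(taskId) !== specDir) {
  return;
}
```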
🟡 [NEW-UNWATCHALL-002] [MEDIUM] unwatchAll() does not cancel in-flight watch() calls
📁 apps/frontend/src/main/file-watcher.ts:137
unwatchAll() closes all current watchers and clears the map but does not add pending taskIds to cancelledWatches. An in-flight watch() call will resume after unwatchAll() completes and create a new watcher that survives the cleanup. The unwatch() method was updated with the cancellation pattern but unwatchAll() was missed.
Suggested fix:
Add at the start of unwatchAll(): for (const taskId of this.pendingWatches.keys()) { this.cancelledWatches.add(taskId); }
🟡 [NEW-STOP-003] [MEDIUM] Unhandled promise from unwatch() in TASK_STOP handler
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:306
In the TASK_STOP handler (line 304-306), fileWatcher.unwatch(taskId) is called in a non-async callback without .catch(). The same PR added .catch() to other call sites but missed this one. A rejection from watcher.close() produces an unhandled promise rejection.
Suggested fix:
Add .catch(): fileWatcher.unwatch(taskId).catch((err) => { console.error('[TASK_STOP] Failed to unwatch:', err); });
🟡 [NEW-RECOVER-004] [MEDIUM] Unhandled promise from unwatch() in TASK_RECOVER_STUCK handler
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:1086
In the recovery block, fileWatcher.unwatch(taskId) at line 1086 is called without await and without .catch(). Although inside an async handler with try/catch, the lack of await means any rejection becomes a floating unhandled promise.
Suggested fix:
Either await the call: await fileWatcher.unwatch(taskId); or add .catch() handler.
🔵 [NEW-DBLCLOSE-005] [LOW] Double-close of same FSWatcher when unwatch() runs during watch()'s await
📁 apps/frontend/src/main/file-watcher.ts:44
When unwatch() is called while watch() is awaiting existing.watcher.close() at line 44, both methods call .close() on the same FSWatcher instance. The cancelledWatches check at line 49 correctly prevents creating a new watcher, but the double-close has already occurred. Chokidar likely handles this gracefully in practice.
Suggested fix:
In unwatch(), when pendingWatches has the taskId, skip closing the existing watcher and just set the cancellation flag then return.
🟠 [CMT-RACE-001] [HIGH] [FROM COMMENTS] CodeRabbit: Race condition with concurrent watch() calls for different specDirs (duplicate of NEW-RACE-001)
📁 apps/frontend/src/main/file-watcher.ts:34
CodeRabbit flagged the same race condition as NEW-RACE-001. When Call A and Call B enter watch() with different specDirs for the same taskId, Call A's watcher gets orphaned when Call B overwrites it.
🟡 [CMT-UNWATCHALL-002] [MEDIUM] [FROM COMMENTS] CodeRabbit: unwatchAll() doesn't cancel in-flight watch() calls (duplicate of NEW-UNWATCHALL-002)
📁 apps/frontend/src/main/file-watcher.ts:137
CodeRabbit flagged the same issue as NEW-UNWATCHALL-002. unwatchAll() clears this.watchers but doesn't add pending taskIds to cancelledWatches, allowing in-flight watch() calls to create new watchers after cleanup.
This review was generated by Auto Claude.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 119-121: The finally block currently unconditionally calls
this.pendingWatches.delete(taskId) and this.cancelledWatches.delete(taskId),
which removes entries belonging to superseding calls; change it to only remove
the map entry if it still belongs to the current task (compare the stored value
to the current call's identifier—e.g., the local specDir or the per-call token
used earlier in the watch function) before deleting. Specifically, update the
finally in the async watcher routine that references taskId to: check
this.pendingWatches.get(taskId) === specDir (or === currentTaskToken) before
deleting, and do the same conditional check for this.cancelledWatches so you
don't wipe out entries from newer/superseding calls.
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts`:
- Around line 1177-1180: The comment incorrectly states that mainSpecDir is
declared in the outer try block; update the comment to accurately state that
mainSpecDir is declared earlier/outside the try block (it lives before the try),
so change the remark near the specFilePath calculation to reflect that
mainSpecDir is defined outside the try scope (reference variable name
mainSpecDir and AUTO_BUILD_PATHS.SPEC_FILE to locate the line).
---
Duplicate comments:
In `@apps/frontend/src/main/file-watcher.ts`:
- Around line 63-67: The code in watch() currently emits an 'error' event when
implementation_plan.json (planPath) is missing — this surfaces a TASK_ERROR to
the UI during normal re-watch cycles; change the behavior so that instead of
this.emit('error', taskId, `Plan file not found: ${planPath}`) the method
performs a silent return (or emits a debug/log event) and exits early. Locate
the check using existsSync(planPath) and replace the error emission with a
non-alarming fallback (e.g., this.emit('debug'..., or a logger.debug with the
planPath) and then return), leaving other logic unchanged.
```ts
} finally {
  this.pendingWatches.delete(taskId);
  this.cancelledWatches.delete(taskId);
```
Bug: finally block unconditionally deletes pendingWatches entry, breaking superseded calls.
When call A (specDirA) is superseded by call B (specDirB), A correctly bails at line 51, but its finally block deletes pendingWatches[taskId] — wiping out B's entry. When B then resumes and checks line 51, it sees undefined !== specDirB and also bails. Neither call creates a watcher.
The finally block should only clean up its own entry:
Proposed fix
} finally {
- this.pendingWatches.delete(taskId);
- this.cancelledWatches.delete(taskId);
+ // Only clean up our own pending entry — a newer watch() may have overwritten it.
+ if (this.pendingWatches.get(taskId) === specDir) {
+ this.pendingWatches.delete(taskId);
+ }
+ // Only clear cancellation if we're still the active pending call.
+ // If another call superseded us, it needs the flag intact.
+ if (!this.pendingWatches.has(taskId)) {
+ this.cancelledWatches.delete(taskId);
+ }
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/frontend/src/main/file-watcher.ts` around lines 119 - 121, The finally
block currently unconditionally calls this.pendingWatches.delete(taskId) and
this.cancelledWatches.delete(taskId), which removes entries belonging to
superseding calls; change it to only remove the map entry if it still belongs to
the current task (compare the stored value to the current call's
identifier—e.g., the local specDir or the per-call token used earlier in the
watch function) before deleting. Specifically, update the finally in the async
watcher routine that references taskId to: check this.pendingWatches.get(taskId)
=== specDir (or === currentTaskToken) before deleting, and do the same
conditional check for this.cancelledWatches so you don't wipe out entries from
newer/superseding calls.
```diff
 // Check if spec.md exists to determine whether to run spec creation or task execution
-const specFilePath = path.join(specDirForWatcher, AUTO_BUILD_PATHS.SPEC_FILE);
+// Check main project path for spec file (spec is created before worktree)
+// mainSpecDir is declared in the outer try block above
+const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
🧹 Nitpick | 🔵 Trivial
Inaccurate comment: mainSpecDir is declared outside the try block, not inside it.
mainSpecDir is declared at Line 937, before the try block that starts at Line 951. The comment on Line 1179 says "declared in the outer try block above" which is misleading.
📝 Fix the comment
```diff
 // Check if spec.md exists to determine whether to run spec creation or task execution
 // Check main project path for spec file (spec is created before worktree)
-// mainSpecDir is declared in the outer try block above
+// mainSpecDir is declared in the outer scope above (line ~937)
 const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 // Check if spec.md exists to determine whether to run spec creation or task execution
 // Check main project path for spec file (spec is created before worktree)
-// mainSpecDir is declared in the outer try block above
+// mainSpecDir is declared in the outer scope above (line ~937)
 const specFilePath = path.join(mainSpecDir, AUTO_BUILD_PATHS.SPEC_FILE);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts` around lines
1177 - 1180, The comment incorrectly states that mainSpecDir is declared in the
outer try block; update the comment to accurately state that mainSpecDir is
declared earlier/outside the try block (it lives before the try), so change the
remark near the specFilePath calculation to reflect that mainSpecDir is defined
outside the try scope (reference variable name mainSpecDir and
AUTO_BUILD_PATHS.SPEC_FILE to locate the line).
```ts
// the initial watch fell back to the main project spec dir, but now the worktree
// is available and implementation_plan.json is being written there.
const currentWatchDir = fileWatcher.getWatchedSpecDir(taskId);
if (currentWatchDir && currentWatchDir !== worktreeSpecDir && existsSync(worktreePlanPath)) {
```
AndyMik90
left a comment
🤖 Auto Claude PR Review
🟠 Follow-up Review: Needs Revision
🟠 Needs revision - 3 blocking issue(s) require fixes.
Resolution Status
- ✅ Resolved: 7 previous findings addressed
- ❌ Unresolved: 0 previous findings remain
- 🆕 New Issues: 5 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 2 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 4 findings verified as genuine issues
- 👤 Needs Human Review: 0 findings require manual verification
🚨 Blocking Issues
- quality: finally block unconditionally clears pendingWatches, breaking superseding concurrent watch() calls
- quality: Concurrent watch() calls double-close the same FSWatcher
- quality: finally block unconditionally clears cancelledWatches flag intended for concurrent call
Verdict
All 20 CI checks are passing. All 7 previous findings from the last review have been resolved — the pendingWatches/cancelledWatches mechanism successfully addresses the original race conditions, unhandled promises, and double-close issues. However, CodeRabbit and our own new-code-reviewer independently identified a remaining HIGH-severity bug in the finally block of watch(): it unconditionally deletes pendingWatches[taskId], which wipes out a superseding concurrent call's entry, causing BOTH calls to bail out and no watcher being created (NEW-001, confirmed valid). The same unconditional cleanup in finally also clears cancelledWatches entries intended for concurrent calls (NEW-003, confirmed valid). Both share the same root cause and the same fix: make the finally block conditional — only clean up if this.pendingWatches.get(taskId) === specDir. 2 findings were dismissed as false positives (NEW-004 followed existing codebase patterns, NEW-005 is prevented by the pendingWatches guard). 1 LOW finding (CMT-002) is a trivial misleading comment. The fix for the blocking issues (NEW-001 + NEW-003) is a one-line change to the finally block.
Review Process
Agents invoked: resolution-verifier, new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (5 selected of 5 total)
🟠 [NEW-001] [HIGH] finally block unconditionally clears pendingWatches, breaking superseding concurrent watch() calls
📁 apps/frontend/src/main/file-watcher.ts:119
When Call A (specDirA) is superseded by Call B (specDirB), Call A correctly detects supersession at line 51 and bails. However, Call A's finally block unconditionally calls this.pendingWatches.delete(taskId) at line 120, wiping out Call B's entry. When Call B resumes and checks pendingWatches.get(taskId) at line 51, it finds undefined !== specDirB and also bails. Neither call creates a watcher, leaving the task unwatched.
Suggested fix:
Make the finally block conditional: if (this.pendingWatches.get(taskId) === specDir) { this.pendingWatches.delete(taskId); } — this ensures only the 'owning' call cleans up the entry.
🟡 [NEW-002] [MEDIUM] Concurrent watch() calls double-close the same FSWatcher
📁 apps/frontend/src/main/file-watcher.ts:42
When two concurrent watch() calls for the same taskId both enter the try block, both capture the same existing watcher reference from this.watchers.get(taskId) before either deletes it (the delete is after the await at line 45). Both then call existing.watcher.close() on the same FSWatcher instance. While chokidar's close() is generally idempotent, this is unnecessary work and a code smell.
Suggested fix:
This is a secondary consequence of the concurrency model. Fixing NEW-001 (conditional finally) reduces the likelihood of this path being exercised. Alternatively, delete the watcher from the map BEFORE the await, though this requires careful consideration of the ordering.
🟡 [NEW-003] [MEDIUM] finally block unconditionally clears cancelledWatches flag intended for concurrent call
📁 apps/frontend/src/main/file-watcher.ts:121
Same root cause as NEW-001. The finally block at line 121 unconditionally clears cancelledWatches for the taskId. If unwatch() is called while Call B owns the pendingWatches entry (setting cancelledWatches for Call B), Call A's finally clears the flag before Call B can check it. Call B then creates a watcher that should have been cancelled.
Suggested fix:
Make cancelledWatches cleanup conditional: if (!this.pendingWatches.has(taskId)) { this.cancelledWatches.delete(taskId); } — only clean up cancellation if no other call owns the pending slot.
🔵 [CMT-002] [LOW] Misleading comment about mainSpecDir declaration scope
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:1179
Comment says 'mainSpecDir is declared in the outer try block above' but mainSpecDir is actually declared at line 937 in the handler function body, before the try block at line 951. It is not inside any try block.
Suggested fix:
Change comment to: '// mainSpecDir is declared earlier in the handler scope (line 937)'
🔵 [CMT-SENTRY-001] [LOW] [FROM COMMENTS] Sentry bot concern: failed initial watch() prevents re-watch (DISMISSED)
📁 apps/frontend/src/main/file-watcher.ts:64
Sentry bot flagged that if initial watch() fails because plan file is missing, subsequent re-watch is prevented. This is a false positive: the finally block properly cleans up pendingWatches, so subsequent watch() calls for the same taskId are not blocked. The guard at lines 34-36 only blocks if pendingSpecDir === specDir AND pendingSpecDir is defined, neither of which is true after cleanup.
This review was generated by Auto Claude.
AndyMik90
left a comment
🤖 Auto Claude PR Review
🟠 Follow-up Review: Needs Revision
🟠 Needs revision - 1 blocking issue(s) require fixes.
Resolution Status
- ✅ Resolved: 5 previous findings addressed
- ❌ Unresolved: 0 previous findings remain
- 🆕 New Issues: 4 new findings in recent changes
Finding Validation
- 🔍 Dismissed as False Positives: 2 findings were re-investigated and found to be incorrect
- ✓ Confirmed Valid: 4 findings verified as genuine issues
- 👤 Needs Human Review: 0 findings require manual verification
🚨 Blocking Issues
- quality: No test coverage for new concurrency mechanisms (pendingWatches, cancelledWatches)
Verdict
All 20 CI checks are passing ✅. All 5 previous findings from the last review have been resolved — the pendingWatches/cancelledWatches ownership-checking mechanism correctly addresses the three blocking race conditions (NEW-001, NEW-002, NEW-003), and both low-severity findings (CMT-002, CMT-SENTRY-001) are also resolved. Additionally, 2 of 6 new findings were dismissed as false positives after validation.
However, 1 confirmed MEDIUM-severity finding remains: the PR introduces complex concurrency mechanisms (pendingWatches deduplication, cancelledWatches cancellation, supersession detection) but has zero test coverage for any of these concurrent scenarios (NEW-006). All existing tests are sequential. Given that this PR's core purpose is fixing race conditions, testing the concurrent paths is essential for regression prevention. Three LOW findings also remain: dead code branch in finally block (NEW-002), getSpecDirForWatcher missing plan file check (NEW-004), and unwatchAll() not clearing pendingWatches (NEW-005). The code quality is high overall — the concurrency design is sound — but the test gap for the most critical logic warrants revision.
Review Process
Agents invoked: resolution-verifier, new-code-reviewer, comment-analyzer, finding-validator
This is an AI-generated follow-up review using parallel specialist analysis with finding validation.
Findings (4 selected of 4 total)
🔵 [NEW-002] [LOW] Dead code branch in finally block: inner condition is always true
📁 apps/frontend/src/main/file-watcher.ts:132
After this.pendingWatches.delete(taskId) at line 128, the check if (!this.pendingWatches.has(taskId)) at line 132 is always true since we just deleted the only possible entry for that key. The inner if-condition provides no actual protection and is dead code. The cancelledWatches.delete() at line 133 always executes when the outer condition is met.
Suggested fix:
Simplify to remove the dead branch, or add a comment explaining it as defensive coding:
```typescript
if (this.pendingWatches.get(taskId) === specDir) {
this.pendingWatches.delete(taskId);
this.cancelledWatches.delete(taskId);
}
```
🔵 [NEW-004] [LOW] getSpecDirForWatcher returns worktree path without verifying plan file exists
📁 apps/frontend/src/main/ipc-handlers/task/execution-handlers.ts:89
The helper prefers worktree path if findTaskWorktree() returns a path, but doesn't check if implementation_plan.json exists there. When a task starts with a worktree but no plan file yet, watch() emits a 'Plan file not found' error to the UI — unnecessary noise that resolves once the plan is created and re-watch triggers.
Suggested fix:
Check for plan file existence before preferring worktree path:
```typescript
function getSpecDirForWatcher(projectPath, specsBaseDir, specId) {
  const worktreePath = findTaskWorktree(projectPath, specId);
  if (worktreePath) {
    const worktreeSpecDir = path.join(worktreePath, specsBaseDir, specId);
    if (existsSync(path.join(worktreeSpecDir, 'implementation_plan.json'))) {
      return worktreeSpecDir;
    }
  }
  return path.join(projectPath, specsBaseDir, specId);
}
```
🔵 [NEW-005] [LOW] unwatchAll() does not clear pendingWatches, leaving stale entries that could block new watch() calls
📁 apps/frontend/src/main/file-watcher.ts:160
unwatchAll() adds all pending taskIds to cancelledWatches but never clears pendingWatches itself. During the await Promise.all() for closing watchers, a new watch() call for the same taskId+specDir could interleave and find the stale pendingWatches entry, returning early without creating a watcher. The original in-flight call would also bail due to cancellation, resulting in no watcher being created.
Suggested fix:
Add this.pendingWatches.clear() after the cancellation loop:
```typescript
for (const taskId of this.pendingWatches.keys()) {
  this.cancelledWatches.add(taskId);
}
this.pendingWatches.clear();
```
🟡 [NEW-006] [MEDIUM] No test coverage for new concurrency mechanisms (pendingWatches, cancelledWatches)
📁 apps/frontend/src/main/__tests__/integration/file-watcher.test.ts:1
The core purpose of this PR is fixing race conditions, but none of the concurrent scenarios are tested. All 12 existing tests are sequential. Missing tests include: concurrent watch() calls with same specDir (deduplication), concurrent watch() calls with different specDirs (supersession), unwatch() during in-flight watch(), unwatchAll() with pending watches, and getWatchedSpecDir().
Suggested fix:
Add tests for key concurrency scenarios: concurrent watch() deduplication, unwatch() during in-flight watch(), and unwatchAll() with pending watches.
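A hedged sketch of what such a concurrency test could look like, assuming a Vitest-style setup and the watch()/getWatchedSpecDir()/unwatchAll() signatures discussed in this PR; the FileWatcher import path and zero-argument constructor are assumptions, not the repository's actual test code:

```ts
import { describe, it, expect, afterEach } from 'vitest';
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import path from 'node:path';
import { FileWatcher } from '../../file-watcher'; // assumed import path

describe('FileWatcher concurrency', () => {
  const fileWatcher = new FileWatcher(); // assumed zero-arg constructor

  afterEach(async () => {
    await fileWatcher.unwatchAll();
  });

  // Create a throwaway spec dir containing an implementation_plan.json.
  function makeSpecDir(): string {
    const dir = mkdtempSync(path.join(tmpdir(), 'spec-'));
    writeFileSync(path.join(dir, 'implementation_plan.json'), '{}');
    return dir;
  }

  it('keeps the latest specDir when two watch() calls race for the same task', async () => {
    const dirA = makeSpecDir();
    const dirB = makeSpecDir();

    // Fire both without awaiting in between to exercise the supersession path.
    await Promise.all([
      fileWatcher.watch('task-1', dirA),
      fileWatcher.watch('task-1', dirB),
    ]);

    // Under the supersession design, the later call should win.
    expect(fileWatcher.getWatchedSpecDir('task-1')).toBe(dirB);
  });
});
```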
This review was generated by Auto Claude.
AndyMik90
left a comment
✅ Auto Claude Review - APPROVED
Status: Ready to Merge
Summary: Merge Verdict: ✅ READY TO MERGE
✅ Ready to merge - All checks passing, no blocking issues found.
No blocking issues found
Risk Assessment
| Factor | Level | Notes |
|---|---|---|
| Complexity | High | Based on lines changed |
| Security Impact | None | Based on security findings |
| Scope Coherence | Good | Based on structural review |
Generated by Auto Claude PR Review
This automated review found no blocking issues. The PR can be safely merged.
Generated by Auto Claude
The FileWatcher was always watching the main project's spec directory for implementation_plan.json changes. When tasks run in a worktree, the backend writes the plan file to the worktree directory instead, so the watcher never detected changes and subtask progress was never sent to the UI. Changes:
- Add getSpecDirForWatcher() helper that checks worktree path first
- Update all 3 file watcher setup locations (TASK_START, TASK_UPDATE_STATUS auto-start, TASK_RECOVER_STUCK auto-restart) to use worktree-aware paths
- Add re-watch logic in execution-progress handler: when a worktree appears after task start, automatically switch the watcher to the worktree path
- Add worktree fallback in exit handler for reading final plan state
- Add getWatchedSpecDir() method to FileWatcher for path comparison
Co-Authored-By: Claude Opus 4.6 <[email protected]>
…dling (#1805) Co-Authored-By: Claude Opus 4.6 <[email protected]>
…rror handling
- Add pendingWatches guard in FileWatcher.watch() to prevent overlapping async calls from creating duplicate watchers (CodeRabbit critical finding)
- Add .catch() to all three fire-and-forget fileWatcher.watch() calls in execution-handlers.ts to prevent unhandled promise rejections
- Remove shadowed specsBaseDir re-declaration in autoRestart block, reusing the outer variable from the same TASK_RECOVER_STUCK handler scope
…variable shadowing
- Change pendingWatches from Set<string> to Map<string, string> (taskId->specDir) so re-watch calls with a different specDir are allowed through instead of silently dropped
- Add cancelledWatches Set to coordinate unwatch() with in-flight watch() calls, preventing watcher leaks when unwatch() runs during watch()'s await points
- Add .catch() handler to fileWatcher.unwatch() call in agent-events-handlers exit handler, consistent with the .catch() pattern used for all watch() calls
- Remove shadowed const mainSpecDir re-declaration inside autoRestart block in execution-handlers.ts, using the outer variable from the enclosing try block instead
Co-Authored-By: Claude Opus 4.6 <[email protected]>
…ions
- Add supersession check in watch() after awaiting existing watcher close to prevent a later concurrent call from having its watcher overwritten
- Return early in unwatch() when a watch() is in-flight to prevent double-closing the same FSWatcher
- Cancel in-flight watch() calls in unwatchAll() by marking their taskIds in cancelledWatches before closing existing watchers
- Add .catch() to fileWatcher.unwatch() calls in TASK_STOP and TASK_RECOVER_STUCK handlers to surface errors instead of silently dropping them
Co-Authored-By: Claude Opus 4.6 <[email protected]>
- Make finally block conditional so superseding watch() calls are not wiped out by the superseded call cleaning up pendingWatches
- Delete watcher from map before awaiting close() to prevent concurrent calls from double-closing the same FSWatcher reference
- Make cancelledWatches cleanup conditional on the call still owning the pendingWatches entry, preventing premature flag removal for concurrent calls
- Fix misleading comment about mainSpecDir declaration scope
Co-Authored-By: Claude Opus 4.6 <[email protected]>
…ards, and test coverage
- Remove dead code in finally block: after delete(), has() is always false so the inner if was always true; simplify to a single delete + cancelledWatches.delete call (Finding 1)
- Add implementation_plan.json existence check in getSpecDirForWatcher before preferring the worktree path, so the watcher is started in the correct directory even when the plan file hasn't been written yet (Finding 2)
- Clear pendingWatches in unwatchAll() so in-flight watch() calls can no longer register new watchers after a full teardown (Finding 3)
- Also clear cancelledWatches in unwatchAll() since in-flight calls bail via the supersession check and won't clean up the flags themselves
- Add comprehensive concurrency tests for FileWatcher covering deduplication, supersession, cancellation, and unwatchAll behaviour (Finding 4)
Co-Authored-By: Claude Opus 4.6 <[email protected]>
…bility Replace hardcoded forward-slash strings in getWatchedSpecDir assertions with path.join() so expected values match on Windows (backslash) and Unix (forward slash) alike. Co-Authored-By: Claude Opus 4.6 <[email protected]>
Force-pushed from 530df03 to 0660f16
The rebase on origin/develop introduced a duplicate `const specDir` declaration that caused TypeScript and Biome CI failures. The variable was already declared earlier in the same scope with the same value. Co-Authored-By: Claude Opus 4.6 <[email protected]>
Merged 9 upstream commits into fork's develop branch:
- fix: PR review error visibility and gh CLI resolution (AndyMik90#1661)
- fix: handle empty/greenfield projects in spec creation (AndyMik90#1426/AndyMik90#1841)
- fix: clear terminalEventSeen on task restart (AndyMik90#1828/AndyMik90#1840)
- fix: watch worktree path for implementation_plan.json (AndyMik90#1805/AndyMik90#1842)
- fix: resolve Claude CLI not found on Windows (AndyMik90#1661/AndyMik90#1843)
- fix: handle planning phase crash and resume recovery (AndyMik90#1562/AndyMik90#1844)
- fix: show dismissed PR review findings in UI (AndyMik90#1852)
- fix: preserve file/line info in PR review extraction (AndyMik90#1857)
- chore: bump version to 2.7.6-beta.6
Conflict resolution strategy (5 files):
- client.py: adopted upstream's CLAUDE.md truncation (better than skip)
- agent-manager.ts: combined upstream projectPath cwd fix with fork's accountOverride
- agent-process.ts: combined upstream PATH merge fix with fork's Copilot env handling
- file-watcher.ts: adopted upstream's race-condition-safe watcher implementation
- execution-handlers.ts: combined upstream plan file checking with fork's broken plan recovery + accountOverride
Summary
Fixes #1805 - Subtasks not generated/updated correctly since v2.7.6-beta.3
Root Cause
The FileWatcher was always watching the main project's spec directory for implementation_plan.json changes. When tasks run in a worktree (the default since v2.7.6-beta.3), the backend writes the plan file to the worktree's spec directory instead. The FileWatcher never detected changes, so TASK_PROGRESS IPC events were never sent to the renderer, resulting in "0 subtasks" in the UI.
Changes
- file-watcher.ts: Added getWatchedSpecDir() method to check what path is currently being watched
- execution-handlers.ts: Added getSpecDirForWatcher() helper that checks the worktree path first and falls back to the main project path. Updated all 3 file watcher setup locations: the TASK_START handler, TASK_UPDATE_STATUS auto-start, and TASK_RECOVER_STUCK auto-restart
- agent-events-handlers.ts: execution-progress handler switches the watcher to the worktree path when a worktree appears after task start (task started before the worktree was created), as shown in the sketch after this list; the exit handler gains a worktree fallback for reading final plan state when the watcher was on the wrong path
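As referenced above, a hedged sketch of the re-watch check in the execution-progress handler, built from the fragments quoted earlier in this review (currentWatchDir, worktreeSpecDir, worktreePlanPath); the surrounding variables and logging are assumed to be in scope, not quoted verbatim:

```ts
// If the initial watch fell back to the main project spec dir but the worktree now exists
// and implementation_plan.json is being written there, switch the watcher over.
const currentWatchDir = fileWatcher.getWatchedSpecDir(taskId);
if (currentWatchDir && currentWatchDir !== worktreeSpecDir && existsSync(worktreePlanPath)) {
  // watch() is async; handle rejection so it never becomes an unhandled promise rejection.
  fileWatcher.watch(taskId, worktreeSpecDir).catch((err) => {
    console.error(`[execution-progress] Failed to switch watcher to ${worktreeSpecDir}:`, err);
  });
}
```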
Closes #1805